991.
With recent developments in the Internet of Things (IoT), the amount of data collected has expanded tremendously, resulting in higher demand for data storage, computational capacity, and real-time processing. Cloud computing has traditionally played an important role in establishing the IoT, but fog computing has recently emerged as a complementary field thanks to its enhanced mobility, location awareness, heterogeneity, scalability, low latency, and geographic distribution. IoT networks, however, are vulnerable to attack because of their open and shared nature, and various fog-computing-based security models have been developed to protect them. A distributed architecture built on an intrusion detection system (IDS) provides a dynamic, scalable IoT environment that can disperse centralized tasks to local fog nodes while successfully detecting advanced malicious threats. In this study, we examine the time-related aspects of network traffic data and present an intrusion detection model based on a two-layer bidirectional long short-term memory (Bi-LSTM) network with an attention mechanism for traffic classification, verified on the UNSW-NB15 benchmark dataset. The proposed model outperforms numerous leading-edge network IDSs based on machine learning in terms of accuracy, precision, recall, and F1 score.
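The attention mechanism over Bi-LSTM states can be illustrated in miniature: each timestep's hidden state receives an alignment score, the scores are softmax-normalized into weights, and the weighted sum yields a single context vector for classification. This is a minimal, library-free sketch of that pooling step, not the paper's implementation; the toy states and scores are illustrative.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of alignment scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, scores):
    """Collapse per-timestep Bi-LSTM states into one context vector.

    hidden_states: list of T vectors (forward/backward concatenation);
    scores: one scalar alignment score per timestep.
    """
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [0.0] * dim
    for w, h in zip(weights, hidden_states):
        for i in range(dim):
            context[i] += w * h[i]
    return weights, context

# toy run: 3 timesteps, 2-dim states; the highest-scored step dominates
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attention_pool(states, [0.1, 0.1, 2.0])
```

The attention weights let the classifier focus on the timesteps of a flow that carry the attack signature rather than averaging uniformly over the whole sequence.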
992.
Blockchain merges with the Internet of Things (IoT) to address security- and privacy-related issues. However, conventional blockchain suffers from scalability problems due to its linear structure, which increases storage overhead, and existing intrusion detection does not account for attack severity, degrading performance. To overcome these issues, we propose the MZWB (Multi-Zone-Wise Blockchain) model. First, every IoT node in the network proves its legitimacy through authentication with the Enhanced Blowfish Algorithm (EBA), evaluated over several metrics. The legitimate nodes then construct and manage the network using a Bayesian Directed Acyclic Graph (B-DAG), again based on several metrics. Intrusion detection is performed in two tiers. In the first tier, a Deep Convolutional Neural Network (DCNN) analyzes data packets by extracting packet-flow features and classifies each packet as normal, malicious, or suspicious. In the second tier, the suspicious packets are re-classified as normal or malicious using a Generative Adversarial Network (GAN). Finally, the intrusion scenario is reconstructed to reduce attack severity: Improved Monkey Optimization (IMO) discovers the attack path over several metrics, and a graph-cut algorithm performs attack scenario reconstruction (ASR). The MZWB method is simulated on the UNSW-NB15 and BoT-IoT datasets using a network simulator (NS-3.26) and compared with previous work on energy consumption, storage overhead, accuracy, response time, attack detection rate, precision, recall, and F-measure. The simulation results show that the proposed MZWB method outperforms existing works.
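The two-tier routing can be sketched in plain Python with stand-in classifiers in place of the DCNN and GAN: tier 1 produces a three-way label, and only packets it marks suspicious are escalated to tier 2 for a final binary decision. The thresholds and the single flow feature below are purely illustrative assumptions, not the paper's models.

```python
def two_tier_classify(packets, tier1, tier2):
    """Route packets through a two-tier scheme: tier-1 gives a three-way
    label; only 'suspicious' packets reach tier-2 for a final
    normal/malicious decision."""
    final = []
    for p in packets:
        label = tier1(p)
        if label == "suspicious":
            label = tier2(p)  # resolves to 'normal' or 'malicious'
        final.append(label)
    return final

# toy stand-ins keyed on a single scalar flow feature
tier1 = lambda p: "malicious" if p > 0.9 else ("suspicious" if p > 0.5 else "normal")
tier2 = lambda p: "malicious" if p > 0.7 else "normal"
result = two_tier_classify([0.2, 0.6, 0.8, 0.95], tier1, tier2)
# → ['normal', 'normal', 'malicious', 'malicious']
```

Escalating only the ambiguous middle band keeps the expensive second-tier model off the vast majority of traffic.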
993.
Sensors produce large amounts of multivariate time series data recording the states of Internet of Things (IoT) systems. Multivariate time series timestamp anomaly detection (TSAD) can identify the timestamps of attacks and malfunctions, but determining which sensor or indicator is abnormal is necessary for a more detailed diagnosis, a process referred to as fine-grained anomaly detection (FGAD). Although FGAD can be built on top of TSAD methods, existing works provide no quantitative evaluation, so the performance is unknown. To tackle the FGAD problem, this paper first verifies that TSAD methods perform poorly when applied directly to the FGAD task, because they fuse features excessively and ignore the dynamic changes in the relationships between indicators. Accordingly, this paper proposes a multivariate time series fine-grained anomaly detection (MFGAD) framework. To avoid excessive feature fusion, MFGAD constructs two sub-models that independently identify the abnormal timestamp and the abnormal indicator, rather than a single model, and then combines the two kinds of results to detect fine-grained anomalies. Based on this framework, an algorithm combining a Graph Attention Network (GAT) and an Attention Convolutional Long Short-Term Memory (A-ConvLSTM) is proposed: the GAT learns temporal features of multiple indicators to detect abnormal timestamps, while the A-ConvLSTM captures the dynamic relationships between indicators to identify abnormal indicators. Extensive simulations on a real-world dataset demonstrate that the proposed algorithm achieves a higher F1 score and hit rate than extensions of existing TSAD methods, thanks to its two independent sub-models for timestamp and indicator detection.
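The combination step of the two sub-models can be sketched simply: a (timestamp, indicator) pair counts as a fine-grained anomaly only when the timestamp detector flags that step and the indicator detector flags that indicator at the same step. This is an illustrative sketch of the framing, with made-up flags; the function and data names are not from the paper.

```python
def fine_grained_anomalies(ts_flags, ind_flags_per_ts):
    """Combine two sub-models' outputs: ts_flags[t] is True when the
    timestamp detector flags step t; ind_flags_per_ts[t] is the set of
    indicators the indicator detector flags at step t. A fine-grained
    anomaly is a (timestamp, indicator) pair flagged by both."""
    anomalies = []
    for t, flagged in enumerate(ts_flags):
        if not flagged:
            continue
        for i in ind_flags_per_ts[t]:
            anomalies.append((t, i))
    return anomalies

# toy run: timestamps 1 and 3 are abnormal; per-step flagged indicators
ts = [False, True, False, True]
inds = [set(), {"temp"}, {"pressure"}, {"temp", "flow"}]
found = sorted(fine_grained_anomalies(ts, inds))
```

Note that "pressure" at step 2 is dropped: the indicator detector alone flagged it, so without timestamp-level corroboration it is not reported, which is how the two independent sub-models temper each other's false positives.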
994.
A brain tumor is a mass or growth of abnormal cells in the brain. In both children and adults, brain tumors are among the leading causes of death. There are several types, including benign (non-cancerous) and malignant (cancerous) tumors, and diagnosing them as early as possible is essential to improve the chances of successful treatment and survival. To address this problem, we bring forth a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet-50, VGG16, VGG19, U-Net) and their integrations for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models are applied to the publicly available dataset from The Cancer Genome Atlas, which covers 120 patients. The pre-trained models classify images as tumor or no tumor, while the integrated models segment the tumor region. We evaluate their performance in terms of loss, accuracy, intersection over union, Jaccard distance, Dice coefficient, and Dice coefficient loss. Among the pre-trained models, U-Net achieves the highest performance, with 95% accuracy, while among the integrated pre-trained models U-Net with ResNet-50 outperforms all others and correctly classifies and segments the tumor region.
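The Dice coefficient used in the evaluation measures the overlap between a predicted segmentation mask and the ground-truth mask: 2|A∩B| / (|A| + |B|). A minimal sketch on flattened binary masks (the epsilon guard and example masks are illustrative):

```python
def dice_coefficient(pred, truth, eps=1e-7):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2*|intersection| / (|pred| + |truth|). eps guards the empty-mask case,
    so two empty masks score 1.0 (perfect agreement)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * inter + eps) / (sum(pred) + sum(truth) + eps)

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) ≈ 0.6667
```

Dice coefficient loss, also listed among the metrics, is simply 1 minus this score, which makes the overlap directly optimizable by gradient descent.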
995.
996.
Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on semi-supervised learning mechanisms that identify abnormal network traffic from the mix of labeled and unlabeled data found in industry. However, training and classifying network traffic in real time is challenging: it can degrade the overall dataset and make attacks difficult to prevent. In addition, existing semi-supervised learning research often fails to analyze experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning that addresses these issues by dynamically training small subsets with GANomaly, an image anomaly detection model. First, we introduce a deep neural network (DNN)-based GANomaly for semi-supervised learning. Second, we present an adaptive algorithm for the DNN-based GANomaly, validated on four subsets of the adaptive dataset. Finally, we demonstrate a monitoring system that incorporates three explainability techniques, Shapley additive explanations, reconstruction error visualization, and t-distributed stochastic neighbor embedding, to respond effectively to attacks on traffic data at the feature engineering, semi-supervised learning, and adaptive learning stages. Compared to other one-class classification techniques, the proposed DNN-based GANomaly achieves F1 scores higher by 13% and 8% and accuracy higher by 4.17% and 11.51% on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) and UNSW-NB15 datasets, respectively. Experiments with the proposed adaptive learning mostly show improvements over the initial values, and an analysis and monitoring system combining the three explainability methodologies is also described. The proposed method thus has potential advantages for practical industrial deployment, and future research will explore handling unbalanced real-time datasets in various scenarios.
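GANomaly-family detectors rest on a simple idea that the reconstruction error visualization above exploits: a model trained only on normal traffic reconstructs normal samples well and anomalies poorly, so the reconstruction error itself serves as the anomaly score. A minimal, library-free sketch of that scoring step; the L1 error, threshold, and toy data are illustrative assumptions, not values from the paper.

```python
def anomaly_scores(originals, reconstructions):
    """Score each sample by its L1 reconstruction error: samples the
    model fails to reconstruct (i.e. unlike the normal training data)
    receive high scores."""
    return [sum(abs(o - r) for o, r in zip(x, y))
            for x, y in zip(originals, reconstructions)]

def flag(scores, threshold):
    """Binary decision: anomalous iff the score exceeds the threshold."""
    return [s > threshold for s in scores]

X  = [[0.1, 0.2], [0.9, 0.8]]
Xr = [[0.1, 0.25], [0.2, 0.1]]  # second sample reconstructed badly
scores = anomaly_scores(X, Xr)
flags = flag(scores, threshold=0.5)  # → [False, True]
```

Visualizing these per-sample (or per-feature) errors is what makes the detector's decision explainable to an operator: the plot shows *where* the reconstruction diverged, not just that a threshold was crossed.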
997.
To understand the operating state of a rural multi-source water supply system and diagnose problems arising in operation so that timely improvements can be made, we measured the flow, pressure, and energy consumption of the water supply system of Nancai Township, Suqian City, analyzed the existing problems, and proposed improvements. Online water-level gauges, cumulative water meters, and electric-contact pressure gauges were used to continuously measure source water levels, supply flow, and outlet pressure; the energy consumption of each water source was calculated, and measures to reduce it were proposed and implemented. Water-use patterns and leakage in the distribution network were analyzed. A USB-logging digital pressure gauge continuously monitored the network's operating pressure, and the spatial distribution and temporal variation of network pressure were analyzed, leading to recommendations for ensuring adequate supply pressure while reducing excessive pressure.
998.
Through a practical water-conservancy project, this paper introduces the construction principle, workflow, common problems, remedial measures, and pile-quality inspection methods of vibro-replacement stone columns. With detailed data and sound methods, the work provided effective guidance for construction. After treatment, the bearing capacity of the foundation improved markedly and the construction results were good, offering a reference for similar projects.
999.
This paper describes the analysis and handling of anomalies encountered in the static load test of the booster-station complex building of the China Water & Power Rudong offshore (intertidal) 100 MW demonstration wind farm, and points out issues requiring attention during construction, such as the need for real-time monitoring.
1000.
Motivated by recent applications of wireless sensor networks in monitoring infrastructure networks, we address the problem of optimal coverage of infrastructure networks using sensors whose sensing performance decays with distance. We show that this problem can be formulated as a continuous p-median problem on networks. The literature has extensively addressed the discrete p-median problem on networks and in continuum domains, as well as the continuous p-median problem in continuum domains; however, in-depth analysis of the continuous p-median problem on networks has been lacking. With a sensing performance model that decays with distance, each sensor covers a region equivalent to its Voronoi partition on the network under the shortest-path distance metric. Using Voronoi partitions, we define a directional partial derivative of the coverage metric with respect to a sensor's location, and we propose a gradient descent algorithm with guaranteed convergence to a locally optimal solution. The quality of the optimal solution depends on the choice of the initial configuration of sensors, which we obtain in two ways: by solving the discrete p-median problem on a lumped network, and by random sampling, considering two methods, uniform sampling and D²-sampling. The first approach, initialized with the discrete p-median solution, yields the best coverage performance for large networks, but at the cost of high running time. We also observe that gradient descent initialized with D²-sampling yields a solution within at most 7% of the former, with a much shorter running time.
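The D²-sampling initializer mentioned above is the k-means++ seeding rule: pick the first center uniformly at random, then pick each subsequent center with probability proportional to its squared distance to the nearest center chosen so far, which favors well-spread initial configurations. A sketch on a toy 1-D point set (standing in for a network; on the actual problem the distance would be the shortest-path metric, and the function names here are illustrative):

```python
import random

def d2_sample(points, k, dist, seed=0):
    """D²-sampling: first center uniform at random; each next center is
    drawn with probability proportional to its squared distance to the
    nearest already-chosen center."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance from each point to its nearest current center
        d2 = [min(dist(p, c) for c in centers) ** 2 for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
        else:
            centers.append(points[-1])  # guard against float rounding
    return centers

# toy run: two clusters on a line; one center tends to land in each
pts = [0.0, 0.1, 0.2, 10.0, 10.1]
centers = d2_sample(pts, 2, dist=lambda a, b: abs(a - b))
```

Because already-chosen points have zero weight, duplicates are effectively never drawn, and distant clusters are heavily favored for the next center, which is why this seeding gives the gradient descent a good starting configuration cheaply.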